
    Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery

    In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio and data transfer over the Internet. Nonetheless, an increasing share of content delivery services adopt encryption through HTTPS, which is not compatible with traditional ISP-managed approaches like transparent and proxy caching. This raises the need for solutions involving both Internet Service Providers (ISPs) and Content Providers (CPs): by design, the solution should preserve business-critical CP information (e.g., content popularity, user preferences) on the one hand, while allowing for a deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells) on the other hand. In this paper we address this issue by considering a content-oblivious ISP-operated cache. The ISP allocates the cache storage to various content providers so as to maximize the bandwidth savings provided by the cache: the main novelty lies in the fact that, to protect business-critical information, the ISP only needs to measure the aggregated miss rates of the individual CPs and does not need to be aware of the objects that are requested, as in classic caching. We propose a cache allocation algorithm based on a perturbed stochastic subgradient method, and prove that the algorithm converges close to the allocation that maximizes the overall cache hit rate. We use extensive simulations to validate the algorithm and to assess its convergence rate under stationary and non-stationary content popularity. Our results (i) demonstrate the feasibility of content-oblivious caches and (ii) show that the proposed algorithm achieves within 10% of the global optimum in our evaluation.
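    A minimal sketch of the kind of measurement-driven allocation loop described above, assuming a simultaneous-perturbation finite-difference gradient estimate and a projection onto the storage budget; the step size, perturbation magnitude and the `measure_miss_rate` callback are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def project_to_budget(x, budget):
    """Euclidean projection of x onto {x >= 0, sum(x) = budget}."""
    u = np.sort(x)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(x) + 1) > (css - budget))[0][-1]
    theta = (css[rho] - budget) / (rho + 1.0)
    return np.maximum(x - theta, 0.0)

def allocate_cache(measure_miss_rate, num_cps, budget,
                   steps=1000, step_size=0.5, perturbation=1.0):
    """Content-oblivious allocation loop: only aggregate per-CP miss rates
    are observed (measure_miss_rate(alloc) -> array of per-CP miss rates)."""
    alloc = np.full(num_cps, budget / num_cps)      # start from an even split
    for _ in range(steps):
        # Perturb the allocation and estimate, per CP, how its hit rate
        # reacts to more/less storage (hit rate = 1 - miss rate).
        delta = perturbation * np.random.choice([-1.0, 1.0], size=num_cps)
        miss_plus = measure_miss_rate(project_to_budget(alloc + delta, budget))
        miss_minus = measure_miss_rate(project_to_budget(alloc - delta, budget))
        grad_estimate = -(miss_plus - miss_minus) / (2.0 * delta)
        # Subgradient ascent step, projected back onto the storage budget.
        alloc = project_to_budget(alloc + step_size * grad_estimate, budget)
    return alloc
```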

    Cost-aware caching: optimizing cache provisioning and object placement in ICN

    Caching is frequently used by Internet Service Providers as a viable technique to reduce the latency perceived by end users, while jointly offloading network traffic. While the cache hit ratio is generally considered in the literature as the dominant performance metric for this type of system, in this paper we argue that a critical aspect has so far been neglected. Adopting a radically different perspective, we explicitly account for the cost of content retrieval, i.e., the cost associated with the external bandwidth needed by an ISP to retrieve the contents requested by its customers. Interestingly, we discover that classical cache provisioning techniques that maximize cache efficiency (i.e., the hit ratio) lead to suboptimal solutions with higher overall cost. To show this mismatch, we propose two optimization models that either minimize the overall cost or maximize the hit ratio, jointly providing cache sizing, object placement and path selection. We formulate a polynomial-time greedy algorithm to solve the two problems and analytically prove its optimality. We provide numerical results and show that significant cost savings are attainable via a cost-aware design.
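    The abstract does not spell out the greedy algorithm itself; the following is an illustrative sketch, under the assumption that each object has a known size, request rate and per-retrieval cost, of a greedy placement that fills the cache by the largest cost saving per unit of storage.

```python
def greedy_cost_aware_placement(objects, cache_budget):
    """Greedy sketch: cache the objects with the highest retrieval-cost
    saving per byte until the cache budget is exhausted.

    objects: list of dicts with 'id', 'size', 'request_rate', 'retrieval_cost'
             (cost paid on the external link for every miss).
    cache_budget: total cache storage available.
    Returns the set of object ids to keep in the cache.
    """
    # Saving obtained by caching an object = requests served locally * unit cost.
    ranked = sorted(
        objects,
        key=lambda o: o['request_rate'] * o['retrieval_cost'] / o['size'],
        reverse=True,
    )
    placed, used = set(), 0
    for obj in ranked:
        if used + obj['size'] <= cache_budget:
            placed.add(obj['id'])
            used += obj['size']
    return placed
```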

    Assessing transportation accessibility equity via open data

    We propose a methodology to assess transportation accessibility inequity in metropolitan areas. The methodology is based on the classic analysis tools of Lorenz curves and Gini indices, but the novelty resides in the fact that it can be easily applied in an automated way to several cities around the world, with no need for customized data treatment. Indeed, our equity metrics can be computed relying solely on open data, publicly available in standardized form. We showcase our method by studying the transportation equity of four cities, comparing our findings with a recently proposed approach.
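    As a rough illustration of the underlying metrics, the sketch below computes an (optionally population-weighted) Lorenz curve and Gini index from per-zone accessibility values; the zone-level inputs are assumptions and the paper's exact weighting may differ.

```python
import numpy as np

def gini_index(accessibility, population=None):
    """Gini index of per-zone accessibility, optionally population-weighted
    (0 = perfect equality, 1 = maximal inequality)."""
    acc = np.asarray(accessibility, dtype=float)
    pop = np.ones_like(acc) if population is None else np.asarray(population, dtype=float)
    order = np.argsort(acc)
    acc, pop = acc[order], pop[order]
    # Lorenz curve: cumulative share of population vs. share of accessibility.
    lorenz_x = np.concatenate(([0.0], np.cumsum(pop) / pop.sum()))
    lorenz_y = np.concatenate(([0.0], np.cumsum(acc * pop) / np.sum(acc * pop)))
    # Gini = 1 - 2 * area under the Lorenz curve (trapezoidal rule).
    area = np.sum((lorenz_y[1:] + lorenz_y[:-1]) * np.diff(lorenz_x)) / 2.0
    return 1.0 - 2.0 * area

# Equal accessibility everywhere -> Gini of 0; skewed accessibility -> higher.
print(gini_index([10, 10, 10, 10]))             # 0.0
print(gini_index([1, 2, 5, 40], [3, 3, 2, 1]))  # markedly higher (~0.66)
```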

    On the importance of demand consolidation in Mobility on Demand

    Mobility on Demand (MoD) services, like Uber and Lyft, are revolutionizing the way people move in cities around the world and are often considered a convenient alternative to public transit, since they offer higher Quality of Service (QoS: less waiting time, door-to-door service) at a cheap price. In the next decades, these advantages are expected to be further amplified by Automated MoD (AMoD), in which drivers will be replaced by automated vehicles, with a big gain in terms of cost-efficiency. MoD is usually intended as a door-to-door service. However, there has been recent interest toward consolidating, i.e., aggregating, the travel demand by limiting the number of admitted stop locations. This implies users have to walk from/to their intended origin/destination. The contribution of this paper is a systematic study of the impact of consolidation on the operator cost and on user QoS. We introduce a MoD system where pickups and drop-offs can only occur in a limited subset of admitted stop locations. The density of such locations is a system parameter: the lower the density, the more the user demand is consolidated. We show that, by decreasing stop density, we can increase system capacity (the number of passengers we are able to serve); on the contrary, by increasing it, we can improve QoS. The system is tested in AMoDSim, an open-source simulator. The code to reproduce the results presented here is available online. This work is a first step toward flexible mobility services that are able to autonomously re-configure themselves, favoring capacity or QoS, depending on the amount of travel demand coming from users. In other words, the services we envisage in this work can shift their operational mode to any intermediate point in the range from a taxi-like door-to-door service to a bus-like service, with few served stops and more passengers on board.
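    A toy sketch of the consolidation step, assuming admitted stops laid out on a square grid whose spacing controls stop density; the grid layout and the `snap_to_stop` helper are illustrative, not AMoDSim's actual logic.

```python
import math

def build_stop_grid(spacing_m, width_m, height_m):
    """Admitted stop locations on a square grid; a larger spacing means fewer
    stops, i.e., more demand consolidation (spacing is the system parameter)."""
    return [(x, y)
            for x in range(0, int(width_m) + 1, int(spacing_m))
            for y in range(0, int(height_m) + 1, int(spacing_m))]

def snap_to_stop(point, stops):
    """Map a requested origin/destination to the nearest admitted stop and
    return it together with the walking distance imposed on the user."""
    nearest = min(stops, key=lambda s: math.dist(point, s))
    return nearest, math.dist(point, nearest)

stops = build_stop_grid(spacing_m=400, width_m=2000, height_m=2000)
stop, walk_m = snap_to_stop((130, 250), stops)
print(stop, round(walk_m))   # (0, 400), roughly 198 m of walking
```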

    EdgeMORE: improving resource allocation with multiple options from tenants

    Under the paradigm of Edge Computing (EC), a Network Operator (NO) deploys computational resources at the network edge and lets third-party Service Providers (SPs) run on top of them, as tenants. Besides the clear advantages for SPs and final users thanks to the vicinity of computation nodes, an NO aims to allocate edge resources so as to increase its own utility, including bandwidth savings, operational cost reduction, QoE for its users, etc. However, while the number of third-party services competing for edge resources is expected to grow dramatically, the resources deployed cannot increase accordingly, due to physical limitations. Therefore, smart strategies are needed to fully exploit the potential of EC, despite its constraints. To this aim, we propose to leverage service adaptability, a dimension that has mainly been neglected so far: each service can adapt to the amount of resources that the NO has allocated to it, balancing the fraction of the service computation performed at the edge and relying on remote servers, e.g., in the Cloud, for the rest. We propose EdgeMORE, a resource allocation strategy in which SPs express their capability to adapt to different resource constraints by declaring the different configurations under which they are able to run, specifying the resources needed and the utility provided to the NO. The NO then chooses the most convenient option for each SP, in order to maximize the total utility. We formalize EdgeMORE as an Integer Linear Program. We show via simulation that EdgeMORE greatly improves EC utility with respect to the standard approach in which services cannot declare multiple running options.
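    A hedged sketch of the kind of Integer Linear Program described above, written with the PuLP modelling library; the resource names ('cpu', 'mem'), the option format, and the at-most-one-configuration constraint are assumptions based on the abstract rather than the paper's exact formulation.

```python
import pulp

def edgemore_allocation(options, capacity):
    """Pick at most one configuration per SP so that total utility is
    maximized without exceeding the edge resource capacities.

    options: {sp_name: [{'cpu': ..., 'mem': ..., 'utility': ...}, ...]}
    capacity: {'cpu': ..., 'mem': ...}
    Returns {sp_name: index of the chosen configuration}.
    """
    prob = pulp.LpProblem("EdgeMORE_sketch", pulp.LpMaximize)
    x = {(sp, i): pulp.LpVariable(f"x_{sp}_{i}", cat="Binary")
         for sp, confs in options.items() for i in range(len(confs))}

    # Objective: total utility of the chosen configurations.
    prob += pulp.lpSum(options[sp][i]['utility'] * x[sp, i] for sp, i in x)

    # At most one configuration chosen per SP.
    for sp, confs in options.items():
        prob += pulp.lpSum(x[sp, i] for i in range(len(confs))) <= 1

    # Edge resource capacities must not be exceeded.
    for res, cap in capacity.items():
        prob += pulp.lpSum(options[sp][i][res] * x[sp, i] for sp, i in x) <= cap

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {sp: i for (sp, i), var in x.items() if var.value() == 1}
```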

    AccEq-DRT: Planning Demand-Responsive Transit to reduce inequality of accessibility

    Accessibility measures how well a location is connected to surrounding opportunities. We focus on accessibility provided by Public Transit (PT). There is an evident inequality in the distribution of accessibility between city centers or areas close to main transportation corridors and suburbs. In the latter, poor PT service leads to chronic car-dependency. Demand-Responsive Transit (DRT) is better suited to low-density areas than conventional fixed-route PT. However, its potential to tackle accessibility inequality has not yet been exploited. On the contrary, planning DRT without care for inequality (as in the methods proposed so far) can further widen the accessibility gap in urban areas. To the best of our knowledge, this paper is the first to propose a DRT planning strategy, which we call AccEq-DRT, aimed at reducing accessibility inequality while ensuring overall efficiency. To this aim, we combine a graph representation of conventional PT and a Continuous Approximation (CA) model of DRT. The two are combined in the same multi-layer graph, on which we compute accessibility. We then devise a scoring function to estimate each area's need for improvement, appropriately weighting population density and accessibility. Finally, we provide a bilevel optimization method, where the upper level is a heuristic that allocates DRT buses, guided by the scoring function, and the lower level performs traffic assignment. Numerical results in a simplified model of Montreal show that inequality, measured with the Atkinson index, is reduced by up to 34%. Keywords: DRT, Public Transportation, Accessibility, Continuous Approximation, Network Design.
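    For concreteness, a small sketch of the two ingredients named in the abstract: the Atkinson inequality index and a scoring function that weights population density against accessibility. The epsilon, alpha and beta parameters and the exact functional form of the score are assumptions, not the paper's definitions.

```python
import numpy as np

def atkinson_index(accessibility, epsilon=1.0):
    """Atkinson inequality index over strictly positive per-zone accessibility
    values (0 = perfect equality); epsilon is the inequality-aversion parameter."""
    a = np.asarray(accessibility, dtype=float)
    mean = a.mean()
    if epsilon == 1.0:
        return 1.0 - np.exp(np.mean(np.log(a))) / mean   # geometric / arithmetic mean
    return 1.0 - np.mean(a ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon)) / mean

def improvement_score(population_density, accessibility, alpha=1.0, beta=1.0):
    """Toy scoring of each area's need for DRT: dense but poorly served areas
    score high; alpha and beta control the relative weighting."""
    dens = np.asarray(population_density, dtype=float)
    acc = np.asarray(accessibility, dtype=float)
    return dens ** alpha / (acc ** beta + 1e-9)

# Dense, poorly served areas score highest (here, the third zone).
print(improvement_score([5000, 800, 4000], [2.0, 9.0, 0.5]))
```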

    Design and evaluation of network caching systems to improve content distribution over the Internet

    Network caching can help cope with today's Internet traffic explosion and sustain the demand for an increasing user Quality of Experience (QoE). Nonetheless, the techniques proposed in the literature do not exploit all the potential benefits. Indeed, they usually aim to optimize the hit ratio or other network-centric metrics, e.g., path length, latency, etc., while network operators are more focused on more practical metrics, like cost and quality of experience. We devise caching techniques that directly target the latter objectives and show that this allows better performance to be obtained. More specifically, we first propose novel strategies that reduce the Internet Service Provider (ISP) operational cost, by preferentially caching the objects whose cost of retrieval is the largest; we show that a trade-off exists between classic hit-ratio maximization and cost reduction. We then focus on video delivery, since it is the most sensitive to QoE and represents most of the Internet traffic. Classic caching techniques ignore that each video is available in several representations, encoded at different bit-rates and resolutions; we devise techniques that take this into account. Finally, we point out that the techniques presented in the literature assume perfect knowledge of the objects that are crossing the network. Nonetheless, most of the traffic today is encrypted, and thus such caching techniques are inapplicable. To overcome this limit, we propose a mechanism which allows ISPs to cache, even without being able to observe the objects being sent.
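    As an illustration of cost-aware caching in the spirit of the first contribution, here is a sketch of a Greedy-Dual-Size style online policy that preferentially retains objects that are expensive to retrieve; this is a standard textbook policy used for illustration, not necessarily the thesis' own algorithm.

```python
class CostAwareCache:
    """Greedy-Dual-Size style cache: objects that are expensive to retrieve
    (per byte) are kept preferentially over merely popular ones."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.items = {}        # object id -> (size, priority)
        self.inflation = 0.0   # rises each time a victim is evicted (aging floor)

    def request(self, obj_id, size, retrieval_cost):
        """Returns True on a cache hit, False on a miss (object fetched upstream)."""
        if obj_id in self.items:                          # hit: refresh priority
            self.items[obj_id] = (size, self.inflation + retrieval_cost / size)
            return True
        if size > self.capacity:                          # cannot fit at all
            return False
        # Miss: evict the cheapest-to-lose objects until the new one fits.
        while self.used + size > self.capacity:
            victim = min(self.items, key=lambda k: self.items[k][1])
            self.inflation = self.items[victim][1]
            self.used -= self.items[victim][0]
            del self.items[victim]
        self.items[obj_id] = (size, self.inflation + retrieval_cost / size)
        self.used += size
        return False
```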